AI Ethicist


To Post or Not to Post: AI Ethics in the Age of Big Tech

Communications of the ACM

What is the role of an ethicist? Is it to be an impartial observer? A guide to what is good or bad? Here, I will explore the different roles in the context of AI ethics through the terms descriptive, normative, and action AI ethics. AI ethics is a specific field of applied ethics nested in technology ethics and computer ethics.


Can AI be held accountable? AI ethicist on tech giants and the AI boom

Al Jazeera

Tech companies and countries across the globe are racing to develop more advanced Artificial Intelligence. As this technology becomes more entrenched in everyday life, there are growing concerns over AI amplifying misinformation and being used in government surveillance and war. So where does the current boom leave efforts to keep AI in check? And how is the growing influence of tech billionaires shaping global politics? Marc Lamont Hill speaks to the CEO of Humane Intelligence, and former Machine Learning Ethics director at Twitter, Rumman Chowdhury.


Three Kinds of AI Ethics

Ratti, Emanuele

arXiv.org Artificial Intelligence

There is an overwhelming abundance of works in AI Ethics. This growth is chaotic because of how sudden it is, its volume, and its multidisciplinary nature. This makes it difficult to keep track of debates, and to systematically characterize the goals, research questions, methods, and expertise required of AI ethicists. In this article, I show that the relation between AI and ethics can be characterized in at least three ways, which correspond to three well-represented kinds of AI ethics: ethics and AI; ethics in AI; ethics of AI. I elucidate the features of these three kinds of AI Ethics, characterize their research questions, and identify the kind of expertise that each kind needs. I also show how certain criticisms of AI ethics are misplaced, as they are made from the point of view of one kind of AI ethics against another kind with different goals. All in all, this work sheds light on the nature of AI ethics, and lays the groundwork for more informed discussions about the scope, methods, and training of AI ethicists.


The Machine Ethics podcast: Running faster with Enrico Panai

AIHub

Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology's impact on society. This episode we're chatting with Enrico Panai about the elements of the digital revolution, AI transforming data into information, human-computer interaction, the importance of knowing the tech as a tech philosopher, that ethicists should diagnose not judge, quality and making pasta, whether ethics is really a burden for companies or if you can run faster with ethics, and finding a Marx for the digital world. Enrico Panai is an AI ethicist with a background in philosophy and extensive consulting experience in Italy. He spent seven years as an adjunct professor of Digital Humanities at the University of Sassari. Since moving to France in 2007, he has continued his work as a consultant.


How Pope Francis became the AI ethicist for tech titans and world leaders

Washington Post - Technology News

The European Union is readying a landmark artificial intelligence law that could limit more advanced generative AI models. The Federal Trade Commission is investigating a deal that Microsoft made with the AI start-up Inflection, probing whether the tech giant deliberately set up the investment to avoid a merger review. And U.S. enforcers reached a deal that will open Microsoft to greater scrutiny of how it wields power to dominate artificial intelligence, including its multibillion-dollar investments in ChatGPT maker OpenAI. That relationship has also exposed Microsoft to new reputational risks, as OpenAI chief executive Sam Altman frequently invites controversy.


Digital recreations of dead people need urgent regulation, AI ethicists say

The Guardian

Digital recreations of dead people are on the cusp of reality and urgently need regulation, AI ethicists have argued, warning "deadbots" could cause psychological harm to, and even "haunt", their creators and users. Such services, which are already technically possible to create and legally permissible, could let users upload their conversations with dead relatives to "bring grandma back to life" in the form of a chatbot, researchers from the University of Cambridge suggest. They may be marketed at parents with terminal diseases who want to leave something behind for their child to interact with, or simply sold to still-healthy people who want to catalogue their entire life and create an interactive legacy. But in each case, unscrupulous companies and thoughtless business practices could cause lasting psychological harm and fundamentally disrespect the rights of the deceased, the paper argues. "Rapid advancements in generative AI mean that nearly anyone with internet access and some basic know-how can revive a deceased loved one," said Dr Katarzyna Nowaczyk-Basińska, one of the study's co-authors at Cambridge's Leverhulme centre for the future of intelligence (LCFI).


Towards a Feminist Metaethics of AI

Siapka, Anastasia

arXiv.org Artificial Intelligence

The proliferation of Artificial Intelligence (AI) has sparked an overwhelming number of AI ethics guidelines, boards and codes of conduct. These outputs primarily analyse competing theories, principles and values for AI development and deployment. However, as a series of recent problematic incidents about AI ethics/ethicists demonstrate, this orientation is insufficient. Before proceeding to evaluate other professions, AI ethicists should critically evaluate their own; yet, such an evaluation should be more explicitly and systematically undertaken in the literature. I argue that these insufficiencies could be mitigated by developing a research agenda for a feminist metaethics of AI. Contrary to traditional metaethics, which reflects on the nature of morality and moral judgements in a non-normative way, feminist metaethics expands its scope to ask not only what ethics is but also what our engagement with it should be like. Applying this perspective to the context of AI, I suggest that a feminist metaethics of AI would examine: (i) the continuity between theory and action in AI ethics; (ii) the real-life effects of AI ethics; (iii) the role and profile of those involved in AI ethics; and (iv) the effects of AI on power relations through methods that pay attention to context, emotions and narrative.


The AI Ethicist's Dirty Hands Problem

Communications of the ACM

Assume an AI ethicist uncovers objectionable effects related to the increased usage of AI. What should they do about it? One option is to seek alliances with Big Tech in order to "borrow" their power to change things for the better. Another option is to seek opportunities for change that actively avoid reliance on Big Tech. The choice between these two strategies gives rise to an ethical dilemma.


How to survive as an AI ethicist

MIT Technology Review

It's never been more important for companies to ensure that their AI systems function safely, especially as new laws to hold them accountable kick in. The responsible AI teams they set up to do that are supposed to be a priority, but investment in them is still lagging behind. People working in the field suffer as a result, as I found in my latest piece. Organizations place huge pressure on individuals to fix big, systemic problems without proper support, while they often face a near-constant barrage of aggressive criticism online. The problem also feels very personal: AI systems often reflect and exacerbate the worst aspects of our societies, such as racism and sexism. The problematic technologies range from facial recognition systems that classify Black people as gorillas to deepfake software used to make porn videos of women who have not consented.